Video captioning models rely heavily on the encoder's latent visual representation when describing spatio-temporal events in natural language. Recent progress in encoder-decoder models attends to encoder features mainly through linear interaction with the decoder. However, the growing complexity of models for visual data calls for more explicit feature interaction to capture fine-grained information, which is currently absent in the video captioning domain. Moreover, feature aggregation methods have been used to unveil richer visual representations, either by concatenation or through linear layers. Although these methods build semantically overlapping feature sets for video to some extent, they lead to objective mismatch and feature redundancy. In addition, diversity in captions, a fundamental component of expressing one event from several meaningful perspectives, is currently missing in the temporal, i.e., video captioning, domain. To this end, we propose the Variational Stacked Local Attention Network (VSLAN), which exploits low-rank bilinear pooling for self-attentive feature interaction and stacks multiple video feature streams in a discounted fashion. The learned attributes of each feature stack contribute to our proposed diversity encoding module, followed by a decoding query stage, to facilitate end-to-end diverse and natural captions without any explicit attribute supervision. We evaluate VSLAN on the MSVD and MSR-VTT datasets in terms of syntax and diversity. The CIDEr scores of VSLAN outperform current off-the-shelf methods by 4.5% and 4.8% on MSVD and MSR-VTT, respectively. On the same datasets, VSLAN achieves competitive results on caption diversity metrics.
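VSLAN's exact architecture is not given here; the following is a minimal sketch of the low-rank bilinear pooling idea the abstract names, with all module names and dimensions chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    """Low-rank bilinear pooling: project two feature vectors into a
    shared rank-d space, interact by elementwise product, then pool."""
    def __init__(self, dim_x, dim_y, rank, out_dim):
        super().__init__()
        self.proj_x = nn.Linear(dim_x, rank, bias=False)
        self.proj_y = nn.Linear(dim_y, rank, bias=False)
        self.out = nn.Linear(rank, out_dim)

    def forward(self, x, y):
        # Hadamard product in the low-rank space approximates the full
        # bilinear form x^T W y without the dim_x * dim_y parameters.
        joint = torch.tanh(self.proj_x(x)) * torch.tanh(self.proj_y(y))
        return self.out(joint)

# Example: fuse hypothetical appearance and motion features for a batch of 8.
pool = LowRankBilinearPooling(dim_x=2048, dim_y=1024, rank=256, out_dim=512)
fused = pool(torch.randn(8, 2048), torch.randn(8, 1024))  # shape (8, 512)
```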
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, as well as the mechanisms of its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
Accurate recognition of food items along with quality assessment is of paramount importance in the agricultural industry. Such automated systems can speed up the workflow of the food processing sector and save tons of manual labor. In this connection, the recent advancement of Deep Learning-based architectures has introduced a wide variety of solutions offering remarkable performance in several classification tasks. In this work, we have exploited the concept of Densely Connected Convolutional Neural Networks (DenseNets) for fruit quality assessment. The feature propagation towards the deeper layers enables the network to tackle the vanishing gradient problem and ensures the reuse of features to learn meaningful insights. Evaluated on a dataset of 19,526 images covering six fruits with three quality grades each, the proposed pipeline achieved a remarkable accuracy of 99.67%. The robustness of the model was further tested on the fruit classification and quality assessment tasks separately, where the model produced similar performance, making it suitable for real-life applications.
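As a sketch of the DenseNet-style connectivity the pipeline builds on, the following minimal dense block shows how each layer reuses all preceding feature maps; layer sizes are illustrative, not the authors' configuration:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature
    maps, which lets gradients and features propagate directly."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)  # reuse: later layers see all earlier outputs
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=64, growth_rate=32, num_layers=4)
y = block(torch.randn(1, 64, 56, 56))  # shape (1, 64 + 4*32, 56, 56)
```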
In this paper, we assess the viability of transformer models in end-to-end InfoSec settings, in which no intermediate feature representations or processing steps occur outside the model. We implement transformer models for two distinct InfoSec data formats - specifically URLs and PE files - in a novel end-to-end approach, and explore a variety of architectural designs, training regimes, and experimental settings to determine the ingredients necessary for performant detection models. We show that in contrast to conventional transformers trained on more standard NLP-related tasks, our URL transformer model requires a different training approach to reach high performance levels. Specifically, we show that 1) pre-training on a massive corpus of unlabeled URL data for an auto-regressive task does not readily transfer to binary classification of malicious or benign URLs, but 2) that using an auxiliary auto-regressive loss improves performance when training from scratch. We introduce a method for mixed objective optimization, which dynamically balances contributions from both loss terms so that neither one of them dominates. We show that this method yields quantitative evaluation metrics comparable to those of several top-performing benchmark classifiers. Unlike URLs, binary executables contain longer and more distributed sequences of information-rich bytes. To accommodate such lengthy byte sequences, we introduce additional context length into the transformer by providing its self-attention layers with an adaptive span similar to that of Sukhbaatar et al. We demonstrate that this approach performs comparably to well-established malware detection models on benchmark PE file datasets, but also point out the need for further exploration into model improvements in scalability and compute efficiency.
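The paper's exact balancing rule is not reproduced here; one plausible reading of "dynamically balances contributions from both loss terms" is to rescale the auxiliary auto-regressive loss to the magnitude of the classification loss, sketched below. The scaling scheme is an assumption, not the published method:

```python
import torch

def mixed_objective(cls_loss, ar_loss):
    """Balance a classification loss and an auxiliary auto-regressive
    loss so neither term dominates: rescale the auxiliary term to
    match the current magnitude of the main term (one possible rule)."""
    scale = cls_loss.detach() / (ar_loss.detach() + 1e-8)
    return cls_loss + scale * ar_loss

# Example with dummy scalar losses:
cls_loss = torch.tensor(0.31, requires_grad=True)
ar_loss = torch.tensor(4.70, requires_grad=True)
total = mixed_objective(cls_loss, ar_loss)  # both terms contribute ~equally
total.backward()
```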
We provide a brief, and inevitably incomplete, overview of the use of Machine Learning (ML) and other AI methods in astronomy, astrophysics, and cosmology. Astronomy entered the big data era with the first digital sky surveys in the early 1990s and the resulting Terascale data sets, which required the automation of many data processing and analysis tasks, for example star-galaxy separation, with billions of feature vectors in hundreds of dimensions. The exponential data growth continued with the rise of synoptic sky surveys and Time Domain Astronomy, with the resulting Petascale data streams and the need for real-time processing, classification, and decision making. A broad variety of classification and clustering methods have been applied to these tasks, and this remains a very active area of research. Over the past decade we have seen an exponential growth of the astronomical literature involving a variety of ML/AI applications of ever-increasing complexity and sophistication. ML and AI are now a standard part of the astronomical toolkit. As the data complexity continues to increase, we anticipate further advances leading towards collaborative human-AI discovery.
Skeleton-based Motion Capture (MoCap) systems have long been widely used in the game and film industries for mimicking complex human actions. MoCap data has also proved its effectiveness in human activity recognition tasks. However, the task remains quite challenging for smaller datasets, and the lack of such data for industrial activities adds further difficulty. In this work, we have proposed an ensemble-based machine learning methodology that is targeted to work better on MoCap datasets. The experiments have been performed on the MoCap data given in the Bento Packaging Activity Recognition Challenge 2021 (Bento is the Japanese word for a lunch box). After first processing the raw MoCap data, we achieved an astonishing accuracy of 98% on 10-fold Cross-Validation and 82% on Leave-One-Out Cross-Validation using the proposed ensemble model.
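The abstract does not name its base learners; the following is a minimal sketch of a soft-voting ensemble evaluated with 10-fold cross-validation, with placeholder models and random stand-in features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: one flattened window of joint coordinates per sample.
X, y = np.random.rand(200, 60), np.random.randint(0, 5, 200)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
scores = cross_val_score(ensemble, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```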
Optimization-based meta-learning aims to learn an initialization such that a new, unseen task can be learned within a few gradient updates. Model-Agnostic Meta-Learning (MAML) is a benchmark algorithm comprising two optimization loops: the inner loop is dedicated to learning a new task, while the outer loop produces the meta-initialization. However, the ANIL (Almost No Inner Loop) algorithm shows that feature reuse is an alternative to rapid learning in MAML; that is, the meta-initialization phase primes MAML for feature reuse and removes the need for rapid learning. Contrary to ANIL, we hypothesize that new features may need to be learned during meta-testing: a new, unseen task drawn from a non-similar distribution would require rapid learning in addition to the reuse of existing features. In this paper, we invoke the width-depth duality of neural networks, increasing the width of the network by adding additional computational units (ACUs). The ACUs can learn new atomic features on meta-test tasks, and the associated increase in width aids information propagation in the forward pass. The newly learned features are combined with the existing features in the final layer for meta-learning. Experimental results show that our proposed MAC method outperforms the existing ANIL algorithm on non-similar task distributions by approximately 13% (in the 5-shot task setting).
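A minimal sketch of the width-expansion idea as we read it from the abstract: ACUs are extra units whose outputs are concatenated with the reusable base features before the final layer. The structure and names below are assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class WidenedMetaLearner(nn.Module):
    """Base feature extractor (reused across tasks, as in ANIL) plus
    additional computational units (ACUs) that can learn new atomic
    features at meta-test time; both feed the final task head."""
    def __init__(self, in_dim, base_dim, acu_dim, num_classes):
        super().__init__()
        self.base = nn.Linear(in_dim, base_dim)  # reusable base features
        self.acu = nn.Linear(in_dim, acu_dim)    # added width for new features
        self.head = nn.Linear(base_dim + acu_dim, num_classes)

    def forward(self, x):
        # New ACU features are combined with the existing base features
        # in the last layer, as the abstract describes.
        feats = torch.cat([torch.relu(self.base(x)),
                           torch.relu(self.acu(x))], dim=-1)
        return self.head(feats)

model = WidenedMetaLearner(in_dim=128, base_dim=64, acu_dim=16, num_classes=5)
logits = model(torch.randn(4, 128))  # shape (4, 5)
```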
Developing effective automated classifiers for separating real astrophysical sources from artifacts is crucial for transient follow-ups in wide-field optical surveys. Identifying transient detections among subtraction artifacts after the image differencing process is a key step for such classifiers, known as the real-bogus classification problem. We apply a self-supervised machine learning model, the Deep Embedded Self-Organizing Map (DESOM), to this real-bogus classification problem. DESOM combines an autoencoder and a self-organizing map to perform clustering, in order to distinguish real from bogus detections based on their dimensionality-reduced representations. We use 32x32 normalized detection thumbnails as the input to DESOM. We demonstrate different approaches to training the model and find that our best DESOM classifier achieves a missed detection rate of 6.6% with a false positive rate of 1.5%. DESOM offers a more nuanced way to fine-tune the decision boundary for identifying likely real detections when used in combination with other types of classifiers, for example those built on neural networks or decision trees. We also discuss other potential usages of DESOM and its limitations.
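A minimal sketch of the DESOM structure the abstract describes, an autoencoder whose latent codes are clustered against a grid of SOM prototypes; layer sizes and the 8x8 map are illustrative choices, not the paper's configuration:

```python
import torch
import torch.nn as nn

class DESOM(nn.Module):
    """Autoencoder whose latent space is clustered by a self-organizing
    map; real and bogus detections fall on different map regions."""
    def __init__(self, latent_dim=32, map_h=8, map_w=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256),
                                     nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 32 * 32))
        # One prototype vector per SOM grid node.
        self.prototypes = nn.Parameter(torch.randn(map_h * map_w, latent_dim))

    def forward(self, x):
        z = self.encoder(x)                    # dimensionality-reduced view
        recon = self.decoder(z).view(-1, 1, 32, 32)
        # Distance of each latent code to every SOM prototype; the
        # nearest prototype is the sample's best-matching unit.
        dists = torch.cdist(z, self.prototypes)
        return recon, dists

model = DESOM()
thumbnails = torch.rand(16, 1, 32, 32)         # normalized detection cutouts
recon, dists = model(thumbnails)
bmu = dists.argmin(dim=1)                      # best-matching unit per sample
```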
Multiple object tracking (MOT) has attracted great interest from researchers in recent years and has become one of the trending problems in computer vision, especially with the recent developments in autonomous driving. MOT is a key visual task that faces various challenges, such as occlusion in crowded scenes, similar appearances, the difficulty of detecting small objects, ID switching, and so on. To cope with these challenges, researchers have tried to exploit the attention mechanism of transformers, the interrelations among tracklets via graph convolutional neural networks, and the appearance similarity of objects across frames via Siamese networks; they have also tried CNN-based networks with IoU matching and motion prediction using LSTMs. To bring these scattered techniques under one umbrella, we have studied more than a hundred papers published over the last three years and have tried to extract the techniques that recent researchers focus on most to solve the problems of MOT. We have enlisted numerous applications and possibilities, and discussed how MOT can relate to real life. Our review attempts to show the different perspectives of the techniques that researchers have used to date and to give some future directions for potential researchers. Moreover, we include the popular benchmark datasets and metrics in this review.
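Of the techniques the review covers, IoU-based association is compact enough to sketch; the following greedy matcher between existing tracks and new detections is purely illustrative:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def greedy_iou_match(tracks, detections, threshold=0.3):
    """Assign each detection to the unmatched track of highest IoU."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < threshold:
            break
        if ti not in matched_t and di not in matched_d:
            matches.append((ti, di))
            matched_t.add(ti)
            matched_d.add(di)
    return matches

# The first detection overlaps the track; the second starts a new one.
print(greedy_iou_match([(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]))
```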
IceCube is a cubic-kilometer array of optical sensors, deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole, for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is a challenge due to the geometry of the detector, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. The GNN is able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the current state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared with current maximum-likelihood techniques. When running on a GPU, the GNN is able to process IceCube events at a rate nearly double the median IceCube trigger rate of 2.7 kHz, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
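A minimal sketch of the point-cloud-graph idea: each event is a set of sensor hits, message passing runs over a neighborhood graph built from hit positions, and node features are pooled for classification. Features, layer sizes, and the adjacency rule below are illustrative assumptions, not the IceCube model:

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One message-passing step: average neighbor features, then mix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        # adj: (N, N) row-normalized adjacency over the hit sensors
        neigh = adj @ h
        return torch.relu(self.lin(torch.cat([h, neigh], dim=-1)))

class EventClassifier(nn.Module):
    """Classify an event from a point cloud of sensor hits, each with
    hypothetical (x, y, z, time, charge) features, pooled over nodes."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = SimpleGraphConv(5, 64)
        self.conv2 = SimpleGraphConv(64, 64)
        self.head = nn.Linear(64, num_classes)

    def forward(self, feats, adj):
        h = self.conv2(self.conv1(feats, adj), adj)
        return self.head(h.mean(dim=0))  # permutation-invariant pooling

hits = torch.randn(30, 5)                     # 30 hit sensors in one event
pos = hits[:, :3]
adj = (torch.cdist(pos, pos) < 1.0).float()   # distance-based neighborhoods
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalize for averaging
logits = EventClassifier()(hits, adj)         # shape (2,)
```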